16 research outputs found

    Video dataset of human demonstrations of folding clothing for robotic folding

    General-purpose clothes-folding robots do not yet exist owing to the deformable nature of textiles, which makes it hard to engineer manipulation pipelines or to learn this task. To accelerate research on learning robotic clothes folding, we introduce a video dataset of human folding demonstrations. In total, we provide 8.5 hours of demonstrations from multiple perspectives, yielding 1,000 folding samples of different types of textiles. The demonstrations were recorded in multiple public places, under different conditions, with a diverse set of people. Our dataset consists of anonymized RGB images, depth frames, skeleton keypoint trajectories, and object labels. In this article, we describe our recording setup, the data format, and utility scripts, which can be accessed at https://adverley.github.io/folding-demonstrations.
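    As a rough sketch of how one such sample might be consumed — the directory layout, file names, and JSON fields below are assumptions for illustration, not the dataset's documented format:

```python
# Hypothetical loading sketch for one folding-demonstration sample. The
# directory layout and field names below are assumptions, not the dataset's
# documented interface.
import json
import numpy as np
import cv2  # opencv-python

def load_sample(sample_dir: str):
    """Load the modalities listed in the abstract for a single frame."""
    rgb = cv2.imread(f"{sample_dir}/rgb/000000.png")       # anonymized RGB frame
    depth = np.load(f"{sample_dir}/depth/000000.npy")      # depth frame
    with open(f"{sample_dir}/annotations.json") as f:
        meta = json.load(f)
    keypoints = np.array(meta["skeleton_keypoints"])       # skeleton trajectories
    label = meta["object_label"]                           # e.g. "towel" or "shirt"
    return rgb, depth, keypoints, label
```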

    Simpler learning of robotic manipulation of clothing by utilizing DIY smart textile technology

    Deformable objects such as ropes, wires, and clothing are omnipresent in society and industry but remain under-researched in robotics. This is due to the infinite number of possible configurations a deformable object can take. Engineered approaches cope with this by implementing highly complex operations to estimate the state of the deformable object. This complexity can be circumvented by learning-based approaches, such as reinforcement learning, which can deal with the intrinsically high-dimensional state space of deformable objects. However, the reward function in reinforcement learning must measure the configuration of the highly deformable object, and vision-based reward functions are difficult to implement given the high dimensionality of the state and the complex dynamic behaviour. In this work, we propose looking beyond vision and incorporating other modalities that can be extracted from deformable objects. By integrating tactile sensor cells into a piece of textile, we gain proprioceptive capabilities that are valuable because they provide a reward function for a reinforcement learning agent. We demonstrate on a low-cost dual robotic arm setup that a physical agent can learn, on a single CPU core, to fold a rectangular patch of textile in the real world using a reward function learned from tactile information.
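    A minimal sketch of the core idea — a reward learned from tactile cells; the grid size, the classifier, and the fold-state labels are illustrative assumptions, not the authors' implementation:

```python
# Sketch: map a grid of tactile sensor readings to a fold-state probability
# and use it as an RL reward. Grid size, model, and labels are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

N_CELLS = 16  # hypothetical 4x4 tactile grid sewn into the textile

def train_reward_model(tactile_samples, fold_labels):
    # Labelled tactile snapshots: 1 = correctly folded, 0 = not folded.
    model = LogisticRegression()
    model.fit(tactile_samples.reshape(len(tactile_samples), N_CELLS), fold_labels)
    return model

def reward(model, tactile_reading):
    # The probability of "folded" serves as a dense scalar reward for the agent.
    return model.predict_proba(tactile_reading.reshape(1, N_CELLS))[0, 1]
```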

    Resampling methods for parameter-free and robust feature selection with mutual information

    Combining the mutual information criterion with a forward feature selection strategy offers a good trade-off between the optimality of the selected feature subset and computation time. However, it requires setting the parameter(s) of the mutual information estimator and determining when to halt the forward procedure. These two choices are difficult to make because, as the dimensionality of the subset increases, the estimation of the mutual information becomes less and less reliable. This paper proposes resampling methods, K-fold cross-validation and the permutation test, to address both issues. The resampling methods provide information about the variance of the estimator, which can then be used to set the parameter automatically and to calculate a threshold at which to stop the forward procedure. The procedure is illustrated on a synthetic dataset as well as on real-world examples.
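    A sketch of the stopping idea follows. The paper's mutual information estimator is stood in for by scikit-learn's kNN-based per-feature estimator, and the permutation count and significance level are illustrative choices, not the paper's settings:

```python
# Forward feature selection with a permutation-test stopping rule (sketch).
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def permutation_threshold(xj, y, n_perm=100, alpha=0.05, seed=0):
    # MI between the feature and a permuted target estimates the "no
    # information" null; its (1 - alpha) quantile is the stopping threshold.
    rng = np.random.default_rng(seed)
    null = [mutual_info_regression(xj, rng.permutation(y))[0] for _ in range(n_perm)]
    return float(np.quantile(null, 1 - alpha))

def forward_select(X, y):
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        mi = {j: mutual_info_regression(X[:, [j]], y)[0] for j in remaining}
        best = max(mi, key=mi.get)
        if mi[best] <= permutation_threshold(X[:, [best]], y):
            break  # best remaining feature is indistinguishable from noise
        selected.append(best)
        remaining.remove(best)
    return selected
```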

    Van de Graaff generator for capillary electrophoresis

    A new approach to high-voltage capillary electrophoresis (CE) is proposed, which replaces the standard high-voltage power supply with a Van de Graaff generator, a low-current power source. Because the Van de Graaff generator is a current-limited source (10 μA), potentials exceeding 100 kV can be generated for CE when the electrical resistance of the capillary is maximized. This was achieved by decreasing the capillary diameter and reducing the buffer ionic strength. Using 2 mM borate buffer and a 5 μm i.d. capillary, fluorescently labeled amino acids were separated with efficiencies up to 3.5 million plates, a 5.7-fold improvement in separation efficiency over the normal power supply (NPS) typically used in CE. This separation efficiency was achieved with a simple setup and without significant Joule heating, making the Van de Graaff generator a promising alternative for applying the high potentials required to enhance resolution in the separation and analysis of highly complex samples, for example mixtures of glycans.
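    The scaling behind this design can be made explicit with Ohm's law (a back-of-the-envelope relation, not taken from the paper; L is the capillary length, d its inner diameter, and κ the buffer conductivity, which grows with ionic strength):

```latex
R \;=\; \frac{L}{\kappa A} \;=\; \frac{4L}{\kappa \pi d^{2}},
\qquad
V \;=\; I R .
```

    With the current pinned at roughly 10 μA, halving the diameter quadruples R, and diluting the buffer lowers κ, so the attainable potential V = IR rises accordingly.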

    Learning robotic cloth manipulation

    Endowing robots with dexterous manipulation skills could spur economic welfare, create leisure time in society, reduce harmful manual labour, and provide care for an ageing population. However, while robots are producing our cars, we are still left to our own devices for doing the laundry at home. This shortcoming is due to the major difficulties in perceiving and handling the variability the real world presents. Robots in modern manufacturing require engineers to produce a safe and predictable environment: objects arrive at the same location, in the same orientation, and the robot is preprogrammed to perform a specific manipulation. Unfortunately, the requirement of a predictable environment cannot be met in dynamic environments that handle a wide range of objects, often in the presence of human activity. For example, should a human worker first unfold all clothes so that a robot can easily find the corner points and perform the folding? Indeed, the high variability in modern production environments and households requires robots to handle objects that can take arbitrary shapes, weights, and configurations. This diversity renders traditional robotic control algorithms and grippers unsuitable for deployment in dynamic environments.

    To find methods that can handle the ever-changing nature of human environments, we study the perception and manipulation of objects that provide an infinite number of variations: deformable objects. A deformable object changes shape under force interaction. Deformable objects are omnipresent in industry and society: food, paper, clothes, fruit, cables and sutures, among others. In particular, we study the task of automating the folding of clothes. Folding clothes is a common household task that may be performed by service robots in the future. Handling cloth is also relevant in manufacturing, where technical textile is processed, and in the fashion industry.

    Dealing with the deformable nature of textiles requires fundamental improvements in both hardware and software. Mechanical engineering needs to fit actuators, links, joints and sensors into the limited space of a hand while using soft materials similar to human skin. In addition to engineering more capable hands, control algorithms need to loosen their assumptions about the environment in which robots operate: it is unrealistic to expect highly deformable objects like cloth to always be in the same configuration before a robot manipulates them. A solution for dealing with real-world variability can be found in machine learning. In particular, deep reinforcement learning (RL) combines the function approximation capabilities of deep neural networks with the trial-and-error learning formalism of RL. Deep RL has been shown capable of driving cars, flying helicopters and manipulating rigid objects. However, the data requirements for training highly parameterized functions, like neural networks, are considerable. This data hunger causes an incongruity between the representation learning capabilities of deep neural networks and the high cost of generating real robotic trials. Our research focuses on reducing the learning data required by systems that perceive and manipulate clothing. We implement a cloth simulation method to generate synthetic data, utilize smart textiles for state estimation of cloth, crowdsource a dataset of people folding clothing, and propose a method to estimate how well people are folding clothing without providing labels.
    Actuating a physical robot is slow, expensive and potentially dangerous. For this reason, roboticists resort to physics simulators that model the dynamics of the robot and its environment. However, no integrated robot-and-cloth simulator exists for use in learning experiments: cloth simulators are built either for offline render farms in the film industry or for games, which sacrifice fidelity for real-time rendering. Cloth simulation for robotic learning, by contrast, requires the performance characteristics of online rendering combined with the accuracy of offline rendering. We therefore implement a custom cloth dynamics simulation on the GPU and integrate it into the robotic simulation functionality of the Unity game engine (a minimal sketch of this kind of particle-based dynamics follows this abstract). We find that we can use deep RL to train an agent in our simulation to fold a rectangular piece of cloth twice within 24 hours of wall-clock time on standard computational hardware.

    The developed cloth simulation assumes full access to the state of the cloth. In the real world, however, state estimation of cloth relies on complex vision-based pipelines or high-cost sensing technology. We avoid this complexity and cost by integrating inexpensive tactile sensing technology into a cloth. The cloth becomes an active smart cloth by training a classifier that uses the tactile sensing data to estimate its state. We use this smart cloth to train a low-cost robotic platform to fold the cloth using RL. Our results demonstrate that it is possible to develop a smart cloth with off-the-shelf components and use it effectively for training on a real robotic platform.

    Our smart cloth bridges the gap between our cloth simulation on GPU and state estimation in the real world. However, RL still requires distilling a scalar value that indicates task progress in order to acquire manipulation skills. We believe that learning the reward function from demonstrations may overcome human bias in reward engineering. Unfortunately, when we started our research, no large dataset of people folding clothing existed. We fill this gap by crowdsourcing such a dataset. It consists of roughly 300,000 multi-perspective RGB-D frames, annotated with pose trajectories, quality labels and timestamps indicating sub-steps. This dataset can be used to benchmark research in action recognition and to bootstrap learning from example demonstrations.

    Learning from demonstrations is a prevalent domain in the robot learning community. However, using our cloth folding dataset requires mapping the movements of demonstrators to the embodiment of a robot. Additionally, behavioural cloning is prone to blindly imitating trajectories instead of understanding how actions relate to solving the task. For this reason, estimating how well a process is being executed is preferable to learning the policy directly from demonstrations. Unfortunately, existing methods couple the learning of rewards with policy learning, thereby inheriting all the problems associated with RL. To decouple reward and policy learning, we propose a method to learn task progression from multi-perspective videos of example demonstrations. We avoid incorporating human bias in the labelling process by using time as a self-supervised signal for learning. We demonstrate the first results on expressing the task progression of people folding clothing, without labelling any data.
    General-purpose robots are not yet among us. Robots capable of working in dynamic environments will require a holistic view of software and hardware. We demonstrated the benefits of this approach by outsourcing the intelligence for state estimation to the cloth instead of the robot: by developing a smart cloth, we trained a robot to fold cloth in the real world within a day. Extrapolating this integrated approach to hardware and software leads to embodied intelligence, in which morphology closes the loop with control: co-optimizing body and brain will allow evolving manipulators tailored to their tasks and using them to build a representation of how the world works. Robots can use that feedback to understand how actions influence the environment and learn to solve tasks by using human examples, instrumented objects and their own experiences. This holistic process will enable future robots to understand human intent and solve a large repertoire of manipulation tasks.
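    To give a concrete flavour of the particle-based dynamics behind the custom cloth simulation mentioned above, here is a minimal CPU sketch of a mass-spring cloth step with Verlet integration and constraint projection; the thesis implements this kind of dynamics on the GPU inside Unity, and all constants here (grid size, rest length, time step) are arbitrary illustrative choices:

```python
# Minimal mass-spring cloth step: Verlet integration plus iterative
# spring-length constraint projection. All constants are illustrative.
import numpy as np

N, REST, DT = 10, 0.1, 1e-3
GRAVITY = np.array([0.0, -9.81, 0.0])

# N x N particles on a regular grid in the x-z plane, y pointing up.
grid = np.stack(np.meshgrid(np.arange(N), np.arange(N), indexing="ij"), axis=-1)
pos = np.zeros((N * N, 3))
pos[:, [0, 2]] = grid.reshape(-1, 2) * REST
prev = pos.copy()

# Structural springs between horizontal and vertical grid neighbours.
idx = np.arange(N * N).reshape(N, N)
springs = np.concatenate([np.c_[idx[:, :-1].ravel(), idx[:, 1:].ravel()],
                          np.c_[idx[:-1, :].ravel(), idx[1:, :].ravel()]])

def step(pos, prev, iters=10):
    # Verlet integration: new position from the two previous ones plus gravity.
    pos, prev = 2 * pos - prev + GRAVITY * DT ** 2, pos.copy()
    for _ in range(iters):  # iteratively project spring-length constraints
        a, b = springs[:, 0], springs[:, 1]
        delta = pos[b] - pos[a]
        length = np.linalg.norm(delta, axis=1, keepdims=True)
        corr = 0.5 * (1.0 - REST / np.maximum(length, 1e-9)) * delta
        np.add.at(pos, a, corr)
        np.add.at(pos, b, -corr)
    pos[idx[0]] = prev[idx[0]]  # pin one edge of the cloth in place
    return pos, prev
```

    Calling pos, prev = step(pos, prev) in a loop advances the simulation; the inner constraint-projection loop is what keeps neighbouring particles near their rest distance.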

    UnfoldIR: tactile robotic unfolding of cloth

    Robotic unfolding of cloth is challenging due to the wide range of textile materials and their ability to deform in unpredictable ways. Previous work has focused almost exclusively on visual feedback to solve this task. We present UnfoldIR ("unfolder"), a dual-arm robotic system that relies on infrared (IR) tactile sensing and cloth manipulation heuristics to achieve in-air unfolding of randomly crumpled rectangular textiles by means of edge tracing. The system achieves over 85% coverage on multiple textiles of different sizes and textures. After unfolding, at least three corners are visible in 83.3% to 94.7% of cases. Given these strong "tactile-only" results, we argue that the fusion of tactile and visual sensing can bring cloth unfolding to a new level of performance.
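    As a loose illustration of edge tracing driven purely by tactile feedback — the gripper and sensor interfaces below are hypothetical stand-ins, not UnfoldIR's actual APIs:

```python
# Illustrative tactile edge-tracing loop. `gripper` and `ir_sensor` are
# hypothetical interfaces, not UnfoldIR's actual APIs.
def trace_edge(gripper, ir_sensor, step_size=0.01, max_steps=500):
    """Slide along a grasped cloth edge until a corner (or failure) is detected."""
    for _ in range(max_steps):
        reading = ir_sensor.read()            # IR reflectance across the fingertip
        if reading.cloth_layers == 0:         # cloth slipped out of the fingers
            return "edge lost"
        if reading.is_corner:                 # reflectance pattern typical of a corner
            return "corner reached"
        gripper.adjust_lateral(reading.edge_offset)  # keep the edge centred
        gripper.advance(step_size)                   # slide further along the edge
    return "timeout"
```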

    Learning self-supervised task progression metrics: a case of cloth folding

    An important challenge for smart manufacturing systems is finding relevant metrics that capture task quality and progression for process monitoring, so as to ensure process reliability and safety. Data-driven process metrics construct features and labels from abundant raw process data, which incurs costs and inaccuracies due to the labelling process. In this work, we circumvent expensive process data labelling by distilling the task intent from video demonstrations. We present a method that expresses the task intent as a scalar value by aligning a self-supervised learned embedding to a small set of high-quality task demonstrations. We evaluate our method on the challenging case of monitoring the progress of people folding clothing. We demonstrate that our approach effectively learns to represent task progression without manual labelling of sub-steps or progress in the videos. Using case-based experiments, we find that our method learns task-relevant features and useful invariances, making it robust to noise, distractors and variations in the task and shirts. The experimental results show that the proposed method can monitor processes in domains where state representation is inherently challenging.
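    A minimal sketch of the alignment step — reading scalar progress off a reference demonstration in embedding space. The self-supervised embedding network is assumed to be given, and this simple nearest-neighbour alignment is an illustrative stand-in for the paper's method:

```python
# Sketch: scalar task progress as the normalized position of the nearest
# embedding along one high-quality reference demonstration.
import numpy as np

def task_progress(frame_emb, demo_embs):
    """frame_emb: (D,) embedding of the current frame.
    demo_embs: (T, D) embeddings of one reference demonstration, in order."""
    dists = np.linalg.norm(demo_embs - frame_emb, axis=1)
    t = int(np.argmin(dists))             # closest moment in the reference demo
    return t / (len(demo_embs) - 1)       # scalar progress in [0, 1]
```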

    Fall prevention for community-dwelling older adults in general practice

    Context: Falls are an important problem in the older population, often leading to physical, psychosocial and economic problems. The general practitioner has an important role to play in the prevention of fall incidents. To support physicians in this role, guidelines for fall prevention have been published both in Belgium and abroad. These guidelines propose multifactorial prevention programmes as the best way to prevent fall incidents among community-dwelling older adults. Research questions: Does introducing a multifactorial evaluation and intervention for community-dwelling adults over 65 with an elevated fall risk lead to a reduced number of fall incidents compared with current practice? What difficulties do we, as physicians, experience when implementing a fall prevention guideline in daily practice? What do general practitioners in the region know about fall prevention in general practice? How do general practitioners currently practise fall prevention? What barriers do general practitioners encounter when applying fall prevention in their practice? Method: Prior to the practice project, a literature review was conducted on the problem of falls among older adults and on how fall incidents can be prevented. In the project itself, the existing Flemish practice guideline for fall prevention in community-dwelling older adults (the Praktijkrichtlijn Valpreventie Vlaanderen of the Expertisecentrum Valpreventie Vlaanderen) was applied in our own practice. A multifactorial evaluation and intervention was set up for community-dwelling older adults with an elevated fall risk, and its effect on the number of fall incidents was evaluated. In addition, general practitioners in the region were surveyed about their knowledge of the problem of falls among older adults and of its prevention. Results: 74 patients were screened for an elevated fall risk, which was present in 36 of them. Randomization assigned 18 individuals each to the control and intervention groups; 1 patient dropped out of the control group and 5 out of the intervention group. After 3 months of follow-up, 6 patients in the intervention group had fallen (46.1%) versus 3 in the control group (17.6%). Statistical analysis showed this difference to be statistically non-significant (p = 0.198905). The survey was sent to 117 physicians, 23 of whom completed the full questionnaire (response rate 19.6%). General practitioners in the region know and acknowledge the problem of falls among older adults and see an important role for themselves in its prevention. Nevertheless, they often only discuss the problem with their patients after a fall has occurred (secondary fall prevention). Lack of motivation in the older patient on the one hand, and lack of time and knowledge on the physician's side on the other, are cited as the main barriers to practising fall prevention. Conclusions: The primary aim of multifactorial fall prevention programmes is to reduce the number of fall incidents among older adults. In this practice project, however, we were unable to achieve this. Larger-scale research across multiple practices will be needed to evaluate the effect of such programmes. Further research is also needed into how older adults can be motivated to address their own fall risk and how general practitioners can better integrate fall prevention into their daily practice.

    Comparison of local and global undirected graphical models

    Conditional random fields (CRFs) are discriminative undirected models that are globally normalized. Global normalization protects CRFs from the label bias problem, from which most local models suffer. The recently proposed co-occurrence rate networks (CRNs) are also discriminative undirected models; in contrast to CRFs, CRNs are locally normalized. It has been established that CRNs are immune to the label bias problem even though they are local models. In this paper, we further compare empirical CRNs (ECRNs, which use fully empirical relative frequencies rather than support vector regression) with CRFs. The connection between the co-occurrence rate, which is the exponential function of pointwise mutual information, and copulas is established for the continuous case. Both models are also evaluated statistically in experiments.
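    The relationship mentioned here is the standard definition: the co-occurrence rate of x and y is the exponential of their pointwise mutual information:

```latex
\mathrm{CR}(x, y) \;=\; \frac{p(x, y)}{p(x)\,p(y)}
\;=\; \exp\bigl(\mathrm{PMI}(x, y)\bigr),
\qquad
\mathrm{PMI}(x, y) \;=\; \log \frac{p(x, y)}{p(x)\,p(y)} .
```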